Model bias triggered by long-tailed data has been widely studied. However, measures based on the number of samples cannot explain three phenomena simultaneously: (1) given sufficient data, the classification performance gain from additional samples is marginal; (2) when data are insufficient, classification performance decays precipitously as the number of training samples decreases; (3) models trained on sample-balanced datasets still exhibit different biases for different classes. In this work, we define and quantify the semantic scale of a class, which measures its feature diversity. Experimentally, we find a marginal effect of semantic scale, which neatly describes the first two phenomena. We further propose a quantitative measure of semantic scale imbalance that accurately reflects model bias on multiple datasets, even sample-balanced ones, revealing a novel perspective for the study of class imbalance. Because semantic scale imbalance is prevalent, we propose semantic-scale-balanced learning, including a general loss-improvement scheme and a dynamic re-weighting training framework that overcomes the challenge of computing semantic scales in real time during iterations. Comprehensive experiments show that dynamic semantic-scale-balanced learning consistently yields superior performance on large-scale long-tailed and non-long-tailed natural and medical datasets, a good starting point for mitigating this prevalent but unnoticed model bias.
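The abstract does not spell out how semantic scale is computed; below is a minimal sketch of one plausible reading, which measures a class's semantic scale as the log-determinant volume spanned by its feature vectors and re-weights the loss inversely to it. The function names, the volume formula, and the weighting rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def semantic_scale(features: np.ndarray, eps: float = 1.0) -> float:
    """Semantic scale of one class, approximated (assumption) as the
    log-volume spanned by its centered feature vectors: larger => more
    diverse features. features: (n, d) matrix of class feature vectors."""
    n, d = features.shape
    z = features - features.mean(axis=0)        # center the class features
    # log-det of a regularized Gram matrix; eps * I keeps it well-defined
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps)) * (z.T @ z))
    return 0.5 * logdet

def class_weights(per_class_feats):
    """Assumed re-weighting rule: classes with smaller semantic scales
    (less feature diversity) receive larger loss weights."""
    scales = np.array([semantic_scale(f) for f in per_class_feats])
    w = 1.0 / np.maximum(scales, 1e-8)
    return w / w.mean()
```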
Influence maximization is a crucial problem for mining the deep information of social networks, which aims to select a seed set from the network so as to maximize the number of influenced nodes. To evaluate the influence of a seed set, existing efforts propose surrogate models (transformations) that replace the expensive Monte Carlo simulation process at a lower computational cost. These surrogate transformations, derived from network prior knowledge, induce different search behaviors with similar characteristics from various perspectives, and it is difficult for users to determine a suitable transformation a priori for a specific case. In this paper, we propose a Multi-Transformation Evolutionary Framework for Influence Maximization (MTEFIM) with convergence guarantees, which exploits the potential similarities and unique advantages of alternative transformations and avoids users having to manually determine the most suitable one. In MTEFIM, multiple transformations are optimized simultaneously as multiple tasks, and each transformation is assigned an evolutionary solver. MTEFIM carries out three major components: 1) estimating the potential relationships across transformations from the degree of overlap between individuals (seed sets) of different populations, 2) transferring individuals across populations adaptively according to the transformation relationships, and 3) selecting the final output seed set, which incorporates the knowledge of all surrogate models. The effectiveness of MTEFIM is validated on both benchmark and real-world social networks. Experimental results show that MTEFIM can efficiently exploit potentially transferable knowledge across multiple transformations to achieve highly competitive performance compared with several popular IM-specific methods. The implementation of MTEFIM is available at https://github.com/xiaofangxd/MTEFIM.
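As a rough illustration of the framework's three components, the sketch below runs one evolutionary population per surrogate transformation, estimates transformation relatedness from seed-set overlap, and transfers individuals across populations accordingly. The solver, mutation, and transfer rules are placeholder assumptions, not the paper's algorithm.

```python
import random

def jaccard(a: set, b: set) -> float:
    """Overlap of two seed sets, used to estimate transformation relatedness."""
    return len(a & b) / max(len(a | b), 1)

def mtefim(transforms, nodes, k, pop_size=20, generations=50):
    """Schematic MTEFIM loop (assumes k < len(nodes)).
    transforms: list of surrogate objectives, each mapping a seed set to a score.
    nodes: candidate node ids; k: seed set size."""
    pops = [[set(random.sample(nodes, k)) for _ in range(pop_size)]
            for _ in transforms]
    for _ in range(generations):
        for i, f in enumerate(transforms):
            # local evolutionary step: swap one node, keep the better individual
            for j, ind in enumerate(pops[i]):
                child = set(ind)
                child.remove(random.choice(list(child)))
                child.add(random.choice([n for n in nodes if n not in child]))
                if f(child) > f(ind):
                    pops[i][j] = child
            # cross-population transfer, weighted by seed-set overlap
            for i2, f2 in enumerate(transforms):
                if i2 == i:
                    continue
                best_other = max(pops[i2], key=f2)
                relation = jaccard(max(pops[i], key=f), best_other)
                if random.random() < relation:  # more related => transfer more
                    pops[i][random.randrange(pop_size)] = set(best_other)
    # final seed set pools knowledge from all transformations
    candidates = [max(p, key=f) for p, f in zip(pops, transforms)]
    return max(candidates, key=lambda s: sum(f(s) for f in transforms))
```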
Recently, a large number of deep-learning-based approaches have been successfully applied to various remote sensing image (RSI) recognition tasks. However, most existing advances of deep learning methods in the RSI field rely heavily on features extracted by manually designed backbone networks, which severely limits the potential of deep learning models due to the complexity of RSIs and the limitations of prior knowledge. In this paper, we study a new design paradigm of backbone architectures for RSI recognition tasks, including scene classification, land-cover classification, and object detection. We propose a one-shot architecture search framework based on a weight-sharing strategy and an evolutionary algorithm, called RSBNet, which comprises three stages. First, a supernet constructed in a layer-wise search space is pretrained on a self-assembled large-scale RSI dataset with an ensemble single-path training strategy. Next, the pretrained supernet is equipped with different recognition heads through a switchable recognition module and fine-tuned on each target dataset to obtain task-specific supernets. Finally, we search the optimal backbone architecture for each recognition task with an evolutionary algorithm, without any network training. Extensive experiments on five benchmark datasets covering the different recognition tasks show the effectiveness of the proposed search paradigm and demonstrate that the searched backbones flexibly adapt to different RSI recognition tasks and achieve impressive performance.
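The weight-sharing, single-path idea at the heart of such one-shot search can be illustrated with a minimal supernet in which each layer holds several candidate operations and a single randomly sampled path is trained per step; the candidate ops, dimensions, and sampling below are assumptions for illustration, not RSBNet's actual search space.

```python
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One supernet layer holding several candidate ops; a single path is
    sampled per step so only one candidate receives gradients."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU()),
        ])

    def forward(self, x, choice: int):
        return self.ops[choice](x)

class SuperNet(nn.Module):
    def __init__(self, channels=32, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(MixedLayer(channels) for _ in range(depth))

    def forward(self, x, path):
        for layer, choice in zip(self.layers, path):
            x = layer(x, choice)
        return x

net = SuperNet()
x = torch.randn(2, 32, 16, 16)
path = [random.randrange(3) for _ in net.layers]  # sample one architecture
out = net(x, path)                                # train only this path's weights
```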
Ship detection in aerial images remains an active yet challenging task due to arbitrary object orientations and complex backgrounds from the bird's-eye view. Most existing methods rely on angle prediction or predefined anchor boxes, which makes them sensitive to unstable angle regression and excessive hyperparameter settings. To address these problems, we replace angle-based object encoding with an anchor-free and angle-free paradigm and propose a novel detector that deploys centers and midpoints, encoding each oriented object by its four midpoints, named MidNet. MidNet designs a symmetrical deformable convolution tailored to enhancing ship midpoints, then adaptively matches the center and midpoints of the same ship by predicting the corresponding centripetal shifts and matching radii. Finally, a concise analytical geometry algorithm is proposed to refine the centers and midpoints step by step, building precise oriented bounding boxes. On two public ship detection datasets, HRSC2016 and FGSD2021, MidNet outperforms state-of-the-art detectors, achieving APs of 90.52% and 86.50%. In addition, MidNet obtains competitive results for ship detection on DOTA.
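As a geometric illustration of the midpoint encoding (my reconstruction, not necessarily the paper's exact refinement step), the four corners of an oriented box follow directly from its center and the midpoints of two adjacent sides:

```python
import numpy as np

def obb_from_midpoints(center, midpoints):
    """Build an oriented box's four corners from its center and the four
    edge midpoints. center: (2,); midpoints: (4, 2) ordered
    [top, right, bottom, left]. A sketch of the analytical geometry step."""
    c = np.asarray(center, dtype=float)
    m = np.asarray(midpoints, dtype=float)
    u = m[1] - c                      # half-extent vector along the box width
    v = m[0] - c                      # half-extent vector along the box height
    # the four corners are center +/- u +/- v
    return np.stack([c + u + v, c - u + v, c - u - v, c + u - v])

corners = obb_from_midpoints([0, 0], [[0, 2], [3, 0], [0, -2], [-3, 0]])
print(corners)  # axis-aligned special case: a 6 x 4 box
```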
Existing anchor-based oriented object detection methods have achieved remarkable results, but they require manually preset anchor boxes, which introduce additional hyperparameters and computation. Existing anchor-free methods usually have complex architectures and are not easy to deploy. Our goal is an aerial image detection algorithm that is simple and easy to deploy. In this paper, we present a one-stage anchor-free rotated object detector, FCOSR, based on FCOS, which can be deployed on most platforms. FCOSR has a simple architecture consisting only of convolutional layers. Our work focuses on the label assignment strategy in the training phase. We use an ellipse center sampling method to define a suitable sampling region for oriented bounding boxes (OBBs). A fuzzy sample assignment strategy provides reasonable labels for overlapping objects, and a multi-level sampling module is designed to address insufficient sampling. Together, these strategies assign more appropriate labels to training samples. Our algorithm achieves 79.25, 75.41, and 90.15 mAP on the DOTA1.0, DOTA1.5, and HRSC2016 datasets, respectively, and FCOSR shows superior performance to other methods in single-scale evaluation. We further convert a lightweight FCOSR model to TensorRT format, which achieves 73.93 mAP on DOTA1.0 at 10.68 FPS on a Jetson Xavier NX. The code is available at: https://github.com/lzh420202/fcosr
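The ellipse center sampling idea can be sketched as a point-in-ellipse test in the box's local frame; the shrink factor below is an assumed hyperparameter, and the exact region definition may differ from the paper's.

```python
import numpy as np

def in_sampling_ellipse(points, center, w, h, theta, shrink=0.5):
    """Ellipse center sampling test for an oriented box.
    points: (n, 2) candidate locations; center: (2,) box center;
    w, h: box width/height; theta: rotation in radians.
    Returns a boolean mask of points inside the shrunken inscribed ellipse."""
    d = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    cos, sin = np.cos(theta), np.sin(theta)
    # rotate offsets into the box's local frame
    x_local = d[:, 0] * cos + d[:, 1] * sin
    y_local = -d[:, 0] * sin + d[:, 1] * cos
    a, b = shrink * w / 2, shrink * h / 2        # ellipse semi-axes
    return (x_local / a) ** 2 + (y_local / b) ** 2 <= 1.0

mask = in_sampling_ellipse(np.array([[0.1, 0.0], [2.0, 2.0]]),
                           center=[0, 0], w=4, h=2, theta=np.pi / 6)
print(mask)  # [ True False ]
```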
Sparse representation based classification (SRC) has attracted much attention by casting the recognition problem as a simple linear regression problem. However, SRC methods are still limited by insufficient labeled samples per class, inadequate use of unlabeled samples, and instability of the representation. To address these problems, we propose an unlabeled-data-driven inverse-projection pseudo-full-space representation based classification model with low-rank sparse constraints. The model aims to mine the hidden semantic information and intrinsic structure information of all available data, making it suitable for face recognition problems with few labeled samples and a proportional imbalance between labeled and unlabeled samples. A mixed Gauss-Seidel and Jacobian ADMM algorithm is introduced to solve the model, and its convergence, representation capability, and stability are analyzed. Experiments on three public datasets show that the proposed LR-S-PFSRC model achieves stable results, especially for proportionally imbalanced samples.
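The full pseudo-full-space model and its ADMM solver are beyond a short sketch, but the classic SRC decision rule that such models extend is compact: sparsely code a query over the training dictionary and classify by per-class reconstruction residual. A baseline sketch using scikit-learn's Lasso (not the paper's model):

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y, D, labels, alpha=0.01):
    """Classic SRC decision rule. y: (d,) query; D: (d, n) dictionary of
    training samples as columns; labels: (n,) class label of each column.
    Returns the class whose atoms best reconstruct the query."""
    labels = np.asarray(labels)
    # sparse code: y ~ D @ x with an L1 penalty on x
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, y).coef_
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        xc = np.where(labels == c, x, 0.0)       # keep only class-c coefficients
        residuals.append(np.linalg.norm(y - D @ xc))
    return classes[int(np.argmin(residuals))]
```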
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in vector space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GloVe is a popular unsupervised model incorporating corpus-wide word co-occurrence statistics. Such word embeddings have significantly boosted important NLP tasks, including sentiment analysis, document classification, and machine translation. However, the embeddings are dense floating-point vectors, making them expensive to compute and difficult to interpret. In this paper, we instead propose to represent the semantics of words with a few defining words related through propositional logic. To produce such logical embeddings, we introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised manner. The clauses consist of contextual words like "black," "cup," and "hot" that define other words like "coffee," thus being human-understandable. We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GloVe on six classification tasks. Furthermore, we investigate the interpretability of our embedding using the logical representations acquired during training. We also visualize word clusters in vector space, demonstrating how our logical embedding co-locates similar words.
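Training the Tsetlin Machine autoencoder requires the TM machinery itself, but the flavor of the resulting logical embeddings can be shown with a toy: each word as a set of defining context words, compared by set overlap rather than a dot product. The vocabulary and similarity measure below are invented for illustration, not taken from the paper.

```python
# A toy illustration of the *idea* of logical embeddings (not the TM
# autoencoder): each word is described by a small set of defining context
# words, and similarity is set overlap instead of a dot product.
embeddings = {
    "coffee": {"black", "cup", "hot", "drink"},
    "tea":    {"cup", "hot", "drink", "green"},
    "car":    {"drive", "road", "wheel", "engine"},
}

def logical_similarity(w1: str, w2: str) -> float:
    a, b = embeddings[w1], embeddings[w2]
    return len(a & b) / len(a | b)               # Jaccard overlap of clause words

print(logical_similarity("coffee", "tea"))       # 0.6 -> related drinks
print(logical_similarity("coffee", "car"))       # 0.0 -> unrelated
```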
The Tsetlin Machine (TM) has been gaining popularity as an inherently interpretable machine learning method that achieves promising performance with low computational complexity on a variety of applications. The interpretability and low computational complexity of the TM are inherited from the Boolean expressions it uses to represent various sub-patterns. Despite these favorable properties, the TM has not been the go-to method for AI applications, mainly due to its conceptual and theoretical differences from perceptrons and neural networks, which are more widely known and better understood. In this paper, we provide detailed insights into the operational concept of the TM and try to bridge the gap in theoretical understanding between the perceptron and the TM. More specifically, we study the operational concept of the TM following the analytical structure of perceptrons, showing the resemblance between the two. Through this analysis, we show that the TM's weight update can be considered a special case of the gradient weight update. We also perform an empirical analysis of the TM, showing the flexibility in determining clause length, visualizing decision boundaries, and obtaining interpretable Boolean expressions from the TM. In addition, we discuss the advantages of the TM in terms of its structure and its ability to solve more complex problems.
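The resemblance discussed here can be made concrete: a perceptron thresholds a weighted sum of real-valued features, while a TM thresholds a weighted vote over Boolean clause outputs. The toy below (hand-set clauses, not learned ones, and a simplified voting scheme) realizes XOR this way:

```python
import numpy as np

# Perceptron decision: sign of a weighted sum over real-valued features.
def perceptron_predict(x, w, b):
    return 1 if np.dot(w, x) + b >= 0 else 0

# TM-style decision (schematic): a weighted vote over Boolean clause outputs.
# Each clause is a conjunction of literals (x_k or its negation).
def clause(x, include, negate):
    """True iff every included literal is satisfied."""
    return all((not x[k]) if negate[k] else x[k]
               for k in range(len(x)) if include[k])

def tm_predict(x, clauses, votes):
    score = sum(v * clause(x, inc, neg) for (inc, neg), v in zip(clauses, votes))
    return 1 if score >= 0 else 0

# XOR with hand-set clauses: (x0 AND NOT x1) and (NOT x0 AND x1) vote +1;
# (x0 AND x1) and an always-true bias clause vote -1.
clauses = [([1, 1], [0, 1]), ([1, 1], [1, 0]), ([1, 1], [0, 0]), ([0, 0], [0, 0])]
votes = [1, 1, -1, -1]
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, tm_predict(x, clauses, votes))      # 0, 1, 1, 0
```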
Stock market prediction has been a traditional yet complex problem researched across diverse research areas and application domains due to its non-linear, highly volatile, and complex nature. Existing surveys on stock market prediction often focus on traditional machine learning methods rather than deep learning methods, even though deep learning has dominated many domains and, in recent years, gained much success and popularity in stock market prediction. This motivates us to provide a structured and comprehensive overview of research on stock market prediction focusing on deep learning techniques. We present four elaborated subtasks of stock market prediction and propose a novel taxonomy to summarize the state-of-the-art models based on deep neural networks from 2011 to 2022. In addition, we provide detailed statistics on the datasets and evaluation metrics commonly used in the stock market. Finally, we highlight some open issues and point out several future directions by sharing new perspectives on stock market prediction.
To improve the performance of the dual-encoder retriever, one effective approach is knowledge distillation from the cross-encoder ranker. Existing works construct the candidate passages following the supervised learning setting, where a query is paired with a positive passage and a batch of negatives. However, through empirical observation, we find that even the hard negatives from advanced methods are still too trivial for the teacher to distinguish, preventing the teacher from transferring abundant dark knowledge to the student through its soft labels. To alleviate this issue, we propose ADAM, a knowledge distillation framework that better transfers the dark knowledge held in the teacher with Adaptive Dark exAMples. Unlike previous works that rely on only one positive and hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query by mixing up and masking in discrete space. Furthermore, as the quality of knowledge held in different training instances varies, as measured by the teacher's confidence score, we propose a self-paced distillation strategy that adaptively concentrates on a subset of high-quality instances for our dark-example-based knowledge distillation, helping the student learn better. We conduct experiments on two widely used benchmarks and verify the effectiveness of our method.
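A sketch of the two ingredients under stated assumptions (the masking/mixing rates, the weighting form, and all function names are illustrative, not the paper's exact recipe): dark examples built by masking and mixing tokens in discrete space, plus a standard soft-label distillation loss with a teacher-confidence instance weight.

```python
import random
import torch
import torch.nn.functional as F

def make_dark_example(pos_tokens, neg_tokens, mask_id, p_mask=0.3, p_mix=0.3):
    """Create a moderately relevant passage in discrete token space
    (a sketch of the mixing-up / masking idea)."""
    out = []
    for i, tok in enumerate(pos_tokens):
        r = random.random()
        if r < p_mask:
            out.append(mask_id)                          # masking
        elif r < p_mask + p_mix:
            out.append(neg_tokens[i % len(neg_tokens)])  # mixing with a negative
        else:
            out.append(tok)
    return out

def distill_loss(student_scores, teacher_scores, tau=1.0):
    """KL between teacher and student distributions over the candidate
    passages of each query (standard soft-label distillation)."""
    t = F.softmax(teacher_scores / tau, dim=-1)
    s = F.log_softmax(student_scores / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

def instance_weight(teacher_scores, tau=1.0):
    """Assumed self-paced weighting: emphasize instances where the teacher's
    top passage probability (its confidence) is high."""
    return F.softmax(teacher_scores / tau, dim=-1).max(dim=-1).values
```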